13 research outputs found

    Evaluation plan for the Dutch Structural Business Statistics questionnaires: using output to guide input improvements

    In times of diminishing resources, the efficiency of statistical information is of paramount importance. Statistics Netherlands (SN) has therefore redesigned the Structural Business Statistics (SBS) and in 1998 launched a project to integrate and standardize the data collection, data editing, and publication of the SBS. This project, now known as IMPECT ("Implementation of the Economical Transformation Process"), has recently focused on evaluating and improving the content of its questionnaires. This paper gives an overview of the goals and strategies of a planned evaluation of almost 200 questionnaires covering a range of industries.

    Not Willing, Not Able: Causes of Measurement Error in Business Surveys

    National statistical institutes must collect accurate data from businesses in a timely and cost-effective way, without causing undue response burden. An adequate design of the information request is critical in achieving this goal. This paper describes the lessons we have learned about the design of business survey questionnaires from a thorough evaluation of the questionnaires of a typical business survey for official statistics, the Structural Business Survey. The paper presents a framework for understanding factors that contribute to missing and inaccurate data and draws a number of conclusions about how the design of business surveys can be improved to take these factors into account.

    Testing the Effects of Automated Navigation in a General Population Web Survey

    This study investigates how an auto-forward design, where respondents navigate through a web survey automatically, affects response times and navigation behavior in a long mixed-device web survey. We embedded an experiment in a health survey administered to the general population in the Netherlands to test the auto-forward design against a manual-forward design. Analyses are based on detailed paradata that track respondents' behavior in navigating the survey. We find that an auto-forward design decreases completion times and that questions on pages with automated navigation are answered significantly faster than questions on pages with manual navigation. However, we also find that respondents use the navigation buttons more in the auto-forward condition than in the manual-forward condition, largely canceling out the reduction in survey duration. Furthermore, the answer options 'I don't know' and 'I'd rather not say' are used just as often in the auto-forward condition as in the manual-forward condition, indicating no differences in satisficing behavior. We conclude that auto-forwarding can be used to reduce completion times, but we also advise carefully considering how manual and auto-forwarding are mixed within a survey.
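    To make the navigation contrast concrete, below is a minimal sketch of an auto-forward rule as it might be wired into a web survey front end. It is illustrative only, written in TypeScript against the browser DOM; the radio-group name and the advanceToNextPage helper are hypothetical and are not taken from the study's instrument.

        // Minimal sketch (hypothetical): advance to the next survey page as
        // soon as a single-choice answer is selected; a manual "Next" button
        // can remain available alongside this rule, as in a mixed design.
        function advanceToNextPage(): void {
          // A real survey engine would load the next question page and log a
          // paradata navigation event here.
          console.log("auto-forward: navigating to next page");
        }

        function enableAutoForward(radioGroupName: string, delayMs = 300): void {
          const options = document.querySelectorAll<HTMLInputElement>(
            `input[type="radio"][name="${radioGroupName}"]`
          );
          options.forEach((option) => {
            option.addEventListener("change", () => {
              // Brief delay so respondents see their selection registered
              // before the page advances.
              setTimeout(advanceToNextPage, delayMs);
            });
          });
        }

        // Usage (hypothetical question identifier):
        enableAutoForward("q_general_health");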

    Qualitative testing for official establishment survey questionnaires

    Best-practice research on testing establishment survey questionnaires (for businesses and other organisations) is largely the province of official statistics and has developed more slowly than the corresponding research on household surveys. With a focus on the development and testing of establishment survey questions and questionnaires, this paper reviews what we know, makes recommendations, reports survey results on the practical application of testing in National Statistical Institutes, and assesses the levels of maturity in the application of these approaches.

    Sharing Data Collected with Smartphone Sensors: Willingness, Participation, and Nonparticipation Bias

    Smartphone sensors allow measurement of phenomena that are difficult or impossible to capture via self-report (e.g., geographical movement, physical activity). Sensors can reduce respondent burden by eliminating survey questions and improve measurement accuracy by replacing or augmenting self-reports. However, if respondents who are not willing to collect sensor data differ on critical attributes from those who are, the results can be biased. Research on the mechanisms of willingness to collect sensor data mostly comes from (nonprobability) online panels and is hypothetical (i.e., it asks participants about the likelihood of participating in a sensor-based study). In a cross-sectional general population randomized experiment, we investigate how features of the request and respondent characteristics influence willingness to share (WTS) and actually sharing smartphone-sensor data. We manipulate the request to either mention or not mention (1) how participation will benefit the participant, (2) participants' autonomy over data collection, and (3) that data will be kept confidential. We assess nonparticipation bias using administrative records. WTS and actual sharing vary by sensor task, participants' autonomy over data sharing, their smartphone skills, level of privacy concerns, and attitudes toward surveys. Fewer people agree to share photos and a video than geolocation, but all who agreed to share photos or a video actually did. Some nonresponse and nonparticipation biases are substantial and compound each other, while others jointly reduce the overall bias. Our findings suggest that sensor-data-sharing decisions depend on sample members' situation when asked to share and on the nature of the sensor task rather than the sensor type.
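    The request manipulation described above amounts to a 2x2x2 factorial of message features. A minimal sketch of such a random assignment in TypeScript follows; the field names are illustrative, not the authors' own.

        // Sketch (hypothetical): independently randomize the three request
        // features mentioned in the abstract, yielding a 2x2x2 factorial.
        interface RequestCondition {
          mentionBenefit: boolean;         // (1) benefit to the participant
          mentionAutonomy: boolean;        // (2) autonomy over data collection
          mentionConfidentiality: boolean; // (3) confidentiality assurance
        }

        function assignCondition(): RequestCondition {
          const flip = (): boolean => Math.random() < 0.5; // fair coin per factor
          return {
            mentionBenefit: flip(),
            mentionAutonomy: flip(),
            mentionConfidentiality: flip(),
          };
        }

        // Example: draw a condition for one sample member.
        console.log(assignCondition());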
